76 research outputs found

    Complexity of n-Queens completion (extended abstract)

    The n-Queens problem is to place n chess queens on an n by n chessboard so that no two queens are on the same row, column or diagonal. The n-Queens Completion problem is a variant, dating to 1850, in which some queens are already placed and the solver is asked to place the rest, if possible. We show that n-Queens Completion is both NP-Complete and #P-Complete. A corollary is that any non-attacking arrangement of queens can be included as part of a solution to a larger n-Queens problem. We introduce generators of random instances for n-Queens Completion and the closely related Blocked n-Queens and Excluded Diagonals Problem. We describe three solvers for these problems, and empirically analyse the hardness of randomly generated instances. For Blocked n-Queens and the Excluded Diagonals Problem, we show the existence of a phase transition associated with hard instances, as has been seen in other NP-Complete problems, but a natural generator for n-Queens Completion did not generate consistently hard instances. The significance of this work is that the n-Queens problem has been very widely used as a benchmark in Artificial Intelligence, but conclusions drawn from it are often disputable because of the simple complexity of the decision problem. Our results give alternative benchmarks which are hard both theoretically and empirically, but for which solving techniques designed for n-Queens need minimal or no change.
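    To make the completion problem concrete, here is a minimal sketch of a brute-force backtracking completer. This is an illustration only, not one of the paper's three solvers; the board size, the pre-placed queen, and all function names are illustrative choices.

```python
# Sketch: brute-force n-Queens Completion. Queens are (row, col) pairs,
# with at most one queen per row.

def attacks(q1, q2):
    """True if two queens share a row, column or diagonal."""
    r1, c1 = q1
    r2, c2 = q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def complete(n, placed):
    """Try to extend `placed` (a list of (row, col) pairs, one per row)
    to a full n-Queens solution; return the solution or None."""
    used_rows = {r for r, _ in placed}

    def backtrack(queens, row):
        if row == n:
            return list(queens)
        if row in used_rows:          # this row holds a pre-placed queen
            return backtrack(queens, row + 1)
        for col in range(n):
            q = (row, col)
            if all(not attacks(q, other) for other in queens):
                queens.append(q)
                result = backtrack(queens, row + 1)
                if result is not None:
                    return result
                queens.pop()          # undo and try the next column
        return None

    return backtrack(list(placed), 0)

# Example: complete a 6x6 board with one queen pre-placed at (0, 1).
solution = complete(6, [(0, 1)])
```

    Exhaustive backtracking like this is exponential in the worst case, which is consistent with the NP-Completeness result above.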

    Generalized support and formal development of constraint propagators

    Constraint programming is a family of techniques for solving combinatorial problems, where the problem is modelled as a set of decision variables (typically with finite domains) and a set of constraints that express relations among the decision variables. One key concept in constraint programming is propagation: reasoning on a constraint or set of constraints to derive new facts, typically to remove values from the domains of decision variables. Specialized propagation algorithms (propagators) exist for many classes of constraints. The concept of support is pervasive in the design of propagators. Traditionally, when a domain value ceases to have support, it may be removed because it takes part in no solutions. Arc-consistency algorithms such as AC2001 make use of support in the form of a single domain value. GAC algorithms such as GAC-Schema use a tuple of values to support each literal. We generalize these notions of support in two ways. First, we allow a set of tuples to act as support. Second, the supported object is generalized from a set of literals (GAC-Schema) to an entire constraint or any part of it. We design a methodology for developing correct propagators using generalized support. A constraint is expressed as a family of support properties, which may be proven correct against the formal semantics of the constraint. We show how to derive correct propagators from the constructive proofs of the support properties. The framework is carefully designed to allow efficient algorithms to be produced. Derived algorithms may make use of dynamic literal triggers or watched literals for efficiency. Finally, three case studies of deriving efficient algorithms are given.
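    The tuple-of-values notion of support can be illustrated with a naive generate-and-test GAC filter. This sketch is for intuition only: practical propagators such as GAC-Schema avoid the exhaustive tuple search shown here, and the function names and example constraint are illustrative.

```python
# Sketch: naive GAC filtering. A literal (var, value) is kept only if
# some allowed tuple of the constraint supports it.

from itertools import product

def gac_filter(domains, allowed):
    """domains: list of sets; allowed: predicate on full tuples.
    Remove unsupported values until a fixpoint is reached."""
    changed = True
    while changed:
        changed = False
        for var, dom in enumerate(domains):
            for val in sorted(dom):
                # Look for a supporting tuple containing (var, val).
                others = [domains[w] if w != var else {val}
                          for w in range(len(domains))]
                if not any(allowed(t) for t in product(*others)):
                    dom.discard(val)   # no support: in no solution
                    changed = True
    return domains

# Example: the constraint x + y == z with small domains.
doms = [{0, 1, 2}, {0, 1, 2}, {3, 4}]
gac_filter(doms, lambda t: t[0] + t[1] == t[2])
```

    Here x = 0 and y = 0 lose their support, since no partner values can reach a sum of 3 or 4, leaving the domains {1, 2}, {1, 2} and {3, 4}.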

    Complexity of n-Queens Completion


    How people visually represent discrete constraint problems

    Problems such as timetabling or personnel allocation can be modeled and solved using discrete constraint programming languages. However, while existing constraint solving software solves such problems quickly in many cases, these systems involve specialized languages that require significant time and effort to learn and apply. These languages are typically text-based and often difficult to interpret and understand quickly, especially for people without engineering or mathematics backgrounds. Visualization could provide an alternative way to model and understand such problems. Although many visual programming languages exist for procedural languages, visual encoding of problem specifications has not received much attention. Future problem visualization languages could represent problem elements and their constraints unambiguously, but without unnecessary cognitive burden for those needing to translate their problem's mental representation into diagrams. As a first step towards such languages, we conducted a study that catalogs how people represent constraint problems graphically. We studied three groups with different expertise: non-computer scientists, computer scientists, and constraint programmers. We analyzed their marks on paper (e.g., arrows), gestures (e.g., pointing), and the mappings to problem concepts (e.g., containers, sets). We provide foundations to guide future tool designs, allowing people to effectively grasp, model and solve problems through visual representations.

    Breaking conditional symmetry in automated constraint modelling with CONJURE

    Many constraint problems contain symmetry, which can lead to redundant search: if a partial assignment is shown to be invalid, we waste time whenever we consider a symmetric equivalent of it. A particularly important class of symmetries comprises those introduced by the constraint modelling process: model symmetries. We present a systematic method by which the automated constraint modelling tool CONJURE can break conditional symmetry as it enters a model during refinement. Our method extends, and is compatible with, our previous work on automated symmetry breaking in CONJURE. The result is the automatic and complete removal of model symmetries for the entire problem class represented by the input specification. This applies to arbitrarily nested conditional symmetries and represents a significant step forward for automated constraint modelling.
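    The basic idea of symmetry breaking, separate from CONJURE's refinement machinery, can be shown on a toy problem: when two decision variables are interchangeable, adding an ordering constraint keeps one representative of each symmetric pair. Everything below (the constraint, domain, and function names) is an illustrative assumption.

```python
# Sketch: breaking a variable-interchange symmetry with an ordering
# constraint. Enumerate solutions of a symmetric constraint over x, y.

from itertools import product

def solutions(constraint, dom, symmetry_break=False):
    sols = []
    for x, y in product(dom, repeat=2):
        # With symmetry breaking, keep only the ordered representative.
        if constraint(x, y) and (not symmetry_break or x <= y):
            sols.append((x, y))
    return sols

dom = range(5)
c = lambda x, y: x + y == 4              # symmetric in x and y
full = solutions(c, dom)                 # includes mirror pairs
broken = solutions(c, dom, symmetry_break=True)  # one per symmetry class
```

    The full enumeration finds five solutions, of which (1, 3)/(3, 1) and (0, 4)/(4, 0) are symmetric pairs; the constrained enumeration visits only three representatives, which is the saving that symmetry breaking buys inside a search tree.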

    Topiramate is more effective than acetazolamide at lowering intracranial pressure

    BACKGROUND The management of idiopathic intracranial hypertension focuses on reducing intracranial pressure to preserve vision and reduce headaches. There is sparse evidence to support the use of some of the drugs commonly used to manage idiopathic intracranial hypertension; we therefore evaluated the efficacy of these drugs at lowering intracranial pressure in healthy rats. METHODS We measured intracranial pressure in female rats before and after subcutaneous administration of acetazolamide, topiramate, furosemide, amiloride and octreotide at clinical doses (equivalent to a single human dose) and high doses (equivalent to a human daily dose). In addition, we measured intracranial pressure after oral administration of acetazolamide and topiramate. RESULTS At clinical and high doses, subcutaneous administration of topiramate lowered intracranial pressure by 32% (p = 0.0009) and 21% (p = 0.015) respectively. There was no significant reduction in intracranial pressure with acetazolamide, furosemide, amiloride or octreotide at any dose. Oral administration of topiramate significantly lowered intracranial pressure by 22% (p = 0.018), compared to a 5% reduction with acetazolamide (p > 0.999). CONCLUSION Our in vivo studies demonstrated that both subcutaneous and oral administration of topiramate significantly lower intracranial pressure. The other drugs tested, including acetazolamide, did not significantly reduce intracranial pressure. Future clinical trials evaluating the efficacy and side effects of topiramate in patients with idiopathic intracranial hypertension would be of interest.

    Biventricular pacemaker therapy improves exercise capacity in patients with non‐obstructive hypertrophic cardiomyopathy via augmented diastolic filling on exercise

    Aims Treatment options for patients with non‐obstructive hypertrophic cardiomyopathy (HCM) are limited. We sought to determine whether biventricular (BiV) pacing improves exercise capacity in HCM patients, and whether this is via augmented diastolic filling. Methods and results Thirty‐one patients with symptomatic non‐obstructive HCM were enrolled. Following device implantation, patients underwent detailed assessment of exercise diastolic filling using radionuclide ventriculography in BiV and sham pacing modes. Patients then entered an 8‐month crossover study of BiV and sham pacing in random order, to assess the effect on exercise capacity [peak oxygen consumption (VO2)]. In a pre‐specified analysis, patients were grouped according to whether left ventricular end‐diastolic volume increased (+LVEDV) or was unchanged/decreased (–LVEDV) with exercise at baseline. Twenty‐nine patients (20 male, mean age 55 years) completed the study. There were 14 +LVEDV patients and 15 –LVEDV patients. Baseline peak VO2 was lower in –LVEDV patients vs. +LVEDV patients (16.2 ± 0.9 vs. 19.9 ± 1.1 mL/kg/min, P = 0.04). BiV pacing significantly increased exercise ΔLVEDV (P = 0.004) and Δstroke volume (P = 0.008) in –LVEDV patients, but not in +LVEDV patients. Left ventricular ejection fraction and end‐systolic elastance did not increase with BiV pacing in either group. This translated into significantly greater improvements in exercise capacity (peak VO2 + 1.4 mL/kg/min, P = 0.03) and quality of life scores (P = 0.02) in –LVEDV patients during the crossover study. There was no effect on left ventricular mechanical dyssynchrony in either group. Conclusion Symptomatic patients with non‐obstructive HCM may benefit from BiV pacing via augmentation of diastolic filling on exercise rather than contractile improvement. This may be due to relief of diastolic ventricular interaction. Clinical Trial Registration: ClinicalTrials.gov NCT00504647.

    A framework for constraint based local search using ESSENCE

    Structured Neighbourhood Search (SNS) is a framework for constraint-based local search for problems expressed in the ESSENCE abstract constraint specification language. The local search explores a structured neighbourhood, where each state in the neighbourhood preserves a high-level structural feature of the problem. SNS derives highly structured, problem-specific neighbourhoods automatically and directly from the features of the ESSENCE specification of the problem. Hence, neighbourhoods can represent important structural features of the problem, such as partitions of sets, even if that structure is obscured in the low-level input format required by a constraint solver. SNS expresses each neighbourhood as a constrained optimisation problem, which is solved with a constraint solver. We have implemented SNS, together with automatic generation of neighbourhoods for high-level structures, and report high-quality results for several optimisation problems.
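    The local-search loop underlying such frameworks can be sketched generically. This is an illustration of neighbourhood-based local search in general, not SNS itself (SNS explores each neighbourhood with a constraint solver rather than by random sampling); the toy problem, move operator, and parameter values are all illustrative assumptions.

```python
# Sketch: generic neighbourhood-based local search. Each step samples a
# neighbour of the current state and accepts non-worsening moves.

import random

def local_search(initial, cost, neighbours, steps=1000, seed=0):
    rng = random.Random(seed)            # fixed seed for reproducibility
    current, best = initial, initial
    for _ in range(steps):
        cand = rng.choice(neighbours(current))
        if cost(cand) <= cost(current):  # accept improving/sideways moves
            current = cand
            if cost(current) < cost(best):
                best = current
    return best

# Toy problem: minimise |sum(state) - 10| over 0/1 vectors of length 8.
# The neighbourhood flips a single bit (a structure-preserving move).
cost = lambda s: abs(sum(s) - 10)
flip = lambda s: [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]
best = local_search(tuple([0] * 8), cost, flip)
```

    The point of a *structured* neighbourhood, by contrast, is that the move operator preserves richer features of the model (e.g. reassigning one element between the cells of a partition) rather than flipping low-level bits.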

    Bayesian spatial NBDA for diffusion data with home-base coordinates

    Network-based diffusion analysis (NBDA) is a statistical method that allows the researcher to identify and quantify social influence on the spread of behaviour through a population. Hitherto, NBDA analyses have not directly modelled spatial population structure. Here we present a spatial extension of NBDA, applicable to diffusion data where the spatial locations of individuals in the population, or of their home bases or nest sites, are available. The method is based on the estimation of inter-individual associations (for association matrix construction) from the mean inter-point distances as represented on a spatial point pattern of individuals, nests or home bases. We illustrate the method using a simulated dataset, and show how environmental covariates (such as those obtained from a satellite image, or from direct observations in the study area) can also be included in the analysis. The analysis is conducted in a Bayesian framework, which has the advantage that prior knowledge of the rate at which individuals acquire a given task can be incorporated into the analysis. This method is especially valuable for studies in which detailed spatially structured data, but no other association data, are available. Technological advances are making the collection of such data in the wild more feasible: for example, bio-logging facilitates the collection of a wide range of variables from animal populations in the wild. We provide an R package, spatialnbda, which is hosted on the Comprehensive R Archive Network (CRAN). This package facilitates the construction of association matrices with the spatial x and y coordinates as the input arguments, and spatial NBDA analyses.
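    The preprocessing step at the heart of the method, turning coordinates into an association matrix, can be sketched as follows. This is not the spatialnbda API (which is an R package); the exponential distance-decay kernel, the scale parameter, and the function name are illustrative assumptions standing in for the paper's distance-based estimator.

```python
# Sketch: build an association matrix from home-base coordinates, with
# association strength decaying in inter-point distance.

import math

def association_matrix(xs, ys, scale=1.0):
    """Return A with A[i][j] = exp(-d_ij / scale), where d_ij is the
    Euclidean distance between home bases i and j; diagonal is zero."""
    n = len(xs)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d = math.hypot(xs[i] - xs[j], ys[i] - ys[j])
                A[i][j] = math.exp(-d / scale)   # closer => stronger
    return A

# Example: three nest sites on a line; the two nearby sites (0 and 1)
# associate more strongly than the distant pair (0 and 2).
A = association_matrix([0.0, 1.0, 5.0], [0.0, 0.0, 0.0], scale=2.0)
```

    A matrix like this then plays the role of the social network in the NBDA likelihood, in place of directly observed association data.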

    Study protocol: Comparison of different risk prediction modelling approaches for COVID-19 related death using the OpenSAFELY platform

    On March 11th 2020, the World Health Organization characterised COVID-19 as a pandemic. Responses to containing the spread of the virus have relied heavily on policies restricting contact between people. Evolving policies regarding shielding, and individual choices about restricting social contact, will rely heavily on the perceived risk of poor outcomes from COVID-19. In order to make informed decisions, both individual and collective, good predictive models are required. For outcomes related to an infectious disease, the performance of any risk prediction model will depend heavily on the underlying prevalence of infection in the population of interest. Incorporating measures of how this changes over time may result in important improvements in prediction model performance. This protocol reports details of a planned study to explore the extent to which incorporating time-varying measures of infection burden improves the quality of risk prediction models for COVID-19 death in a large population of adult patients in England. To achieve this aim, we will compare the performance of different modelling approaches to risk prediction, including static cohort approaches typically used in chronic disease settings and landmarking approaches incorporating time-varying measures of infection prevalence and policy change, using COVID-19-related deaths data linked to longitudinal primary care electronic health records within the OpenSAFELY secure analytics platform.